Abstraction: There is a graph, with nodes and edges. Abstraction involves focusing on what is important and ignoring the rest - for example, we have not shown how much each tourist attraction costs for entry.
Generalization: Both problems are similar - ask students in what respect. Both involved going from one point, visiting all the other points, and coming back to the starting point.
Algorithm: The method to solve the problem. How did you go about traversing the graph? For this kind of problem, depth-first is an intuitive route - you go down a path as far as you can, and if it turns out to be the wrong path, you backtrack and try another one.
Pattern Matching: It seems that we can solve any "route finding" problem by drawing a graph for it - this is pattern matching.
Relate the above to the knight's tour, tour guide, and circular numbers problems discussed last week.
Directory Search
You have a telephone directory and you want to search for a particular name in there (for example, "Alok Mittal") - how would you do that?
If you had to search for "Zach Burns", how would you do that?
Can you think of a common method that works regardless of which name you are searching for?
Students may come up with a linear approach of searching through the directory (for "Alok Mittal" it is actually better to start from the front, and for "Zach Burns" from the back). Some students may have heard of the binary approach. In either case, ask students what would happen if the names were randomly arranged (not in sequential/sorted order).
Starting from that baseline, get students to recognize that sorted order is a special property that can be very useful for search. Binary search exploits it: each "guess" halves the search space, unlike the linear approach, which reduces the search space by only one name per step.
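For the facilitator's reference, here is a minimal sketch of binary search over a sorted list of names (the directory entries below are illustrative, not from any real directory):

```python
def binary_search(sorted_names, target):
    """Return the index of target in sorted_names, or -1 if it is absent."""
    lo, hi = 0, len(sorted_names) - 1
    while lo <= hi:
        mid = (lo + hi) // 2               # "guess" the middle entry
        if sorted_names[mid] == target:
            return mid
        elif sorted_names[mid] < target:
            lo = mid + 1                   # discard the lower half
        else:
            hi = mid - 1                   # discard the upper half
    return -1                              # name not in the directory

directory = ["Alok Mittal", "Jane Roe", "John Doe", "Zach Burns"]
print(binary_search(directory, "Alok Mittal"))   # 0
print(binary_search(directory, "Missing Name"))  # -1
```

Note that the method answers the "name not in the directory" question for free: when the search space shrinks to nothing, the name is provably absent.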
Does your method work if the name is not in the directory?
Can you think of a method that does not work if the name is not in the directory? "Random search" - why would that be a poor choice? (A random search can never be sure the name is absent - it might just not have stumbled on it yet.)
Guess Who?
Play the game of Guess Who with some kids. Ask the others to watch the pattern of guesses being made and comment on whether they are "good guesses" or not. What qualifies as a "good guess"? Let us say we want to be able to guess in the minimum number of steps in most cases (not just when we get lucky).
Take some examples of "bad guesses" - guesses that eliminate only one or two people. Note that such a guess could get lucky, but in the worst case this approach takes far too many guesses. One can also show that it is poor on average (challenge kids to think about why - for 64 people, what is the average number of guesses if we reduce the search space by only 1 each time?).
Get kids to propose "good guesses". The idea, again, is to reduce the search space significantly and predictably in each step - halve it. How many steps would this approach take on average with 64 people? In the worst case?
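A quick back-of-the-envelope check of the two strategies for 64 people, assuming the hidden person is equally likely to be anyone:

```python
import math

n = 64

# Eliminating one person per guess: the hidden person is equally likely
# to be found on guess 1, 2, ..., n, so the average is (1 + 2 + ... + n) / n.
avg_one_by_one = sum(range(1, n + 1)) / n   # 32.5 guesses on average

# Halving the candidates each guess: log2(n) guesses, and the worst
# case is no worse than the average.
halving = math.log2(n)                      # 6 guesses

print(avg_one_by_one, halving)
```

So for 64 people, one-at-a-time elimination averages 32.5 guesses, while halving always finishes in 6.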
Fence the Lion
Draw a big square on the chalkboard and label it "the desert." Tell the students that there is a lion loose in the desert, and it's our job to build a 1mx1m fence around it so it doesn't hurt anyone. We don't want to catch it any other way, since it's a very cranky lion. How should we do it? (We've assumed the lion can't get out of the desert.)
Again, one strategy is to keep dropping 1m x 1m fences at random and hope one catches the lion. The other is to build a fence dividing the desert into two halves. The lion is now in one of the halves. Next, build a fence across the half containing the lion, constraining it to one quarter of the desert. Continue building fences across the half of the remaining desert containing the lion, until the lion is cornered.
Applying Computational Thinking to the problems above
What was common amongst the three problems above? What was different? Can you generalize the three problems?
Note that the generalization has an element of "search" as a problem, but also the ability to partition the search space in half at each step. This context is an important part of the generalization - for example, we could not have used the same approach if the directory were in random order.
What can we abstract out, and what is central to the problems? Does it matter what the faces on the Guess Who cards are? (Think again.) Does it matter that it's a lion?
What is the algorithm?
What is the pattern? "If you see a problem that requires finding something and the search space can be partitioned, use binary search"
What if the search space can be divided into three parts in one step? (Ternary search.)
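A sketch of ternary search on a sorted list, for the facilitator's reference: two probes per step split the remaining range into three parts, and only one part is kept.

```python
def ternary_search(a, target):
    """Return the index of target in sorted list a, or -1 if absent."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        third = (hi - lo) // 3
        m1, m2 = lo + third, hi - third    # two probe points
        if a[m1] == target:
            return m1
        if a[m2] == target:
            return m2
        if target < a[m1]:
            hi = m1 - 1                    # keep the first third
        elif target > a[m2]:
            lo = m2 + 1                    # keep the last third
        else:
            lo, hi = m1 + 1, m2 - 1        # keep the middle third
    return -1

print(ternary_search(list(range(10)), 7))   # 7
print(ternary_search(list(range(10)), -1))  # -1
```

A worthwhile discussion point: each step keeps only a third of the candidates but costs two comparisons, so for searching a sorted list this does not actually beat binary search.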
Can you think of another example of a game that can use this pattern? Example: 20 questions. How many personalities can you distinguish in a perfect game of 20 questions?
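The arithmetic behind the 20 questions answer:

```python
# With perfect yes/no questions, each answer halves the candidate set,
# so k questions can distinguish 2**k personalities.
questions = 20
print(2 ** questions)  # 1048576 - over a million personalities
```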
Homework:
If sorting is so important for searching, how do we get from "unsorted" to "sorted" in the first place? If you were given 1000 directory entries in a random order, how would you arrange them in sorted order?
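One answer the facilitator might keep in reserve (it reuses the halving theme of the lesson) is merge sort; a minimal sketch:

```python
def merge_sort(items):
    """Sort by splitting in half, sorting each half, and merging them."""
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left, right = merge_sort(items[:mid]), merge_sort(items[mid:])
    # Merge the two sorted halves by repeatedly taking the smaller front item.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

print(merge_sort(["Zach", "Alok", "Mittal", "Burns"]))
# ['Alok', 'Burns', 'Mittal', 'Zach']
```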